Making risk minimization tolerant to label noise

Authors

  • Aritra Ghosh
  • Naresh Manwani
  • P. S. Sastry
Abstract

In many applications, the training data from which one needs to learn a classifier is corrupted with label noise. Many standard algorithms such as SVM perform poorly in the presence of label noise. In this paper we investigate the robustness of risk minimization to label noise. We prove a sufficient condition on a loss function for risk minimization under that loss to be tolerant to uniform label noise. We show that the 0 − 1 loss, sigmoid loss, ramp loss and probit loss satisfy this condition, though none of the standard convex loss functions do. We also prove that, by choosing a sufficiently large value of a parameter in the loss function, the sigmoid loss, ramp loss and probit loss can be made tolerant to non-uniform label noise as well, provided the classes are separable under the noise-free data distribution. Through extensive empirical studies, we show that risk minimization under the 0 − 1 loss, the sigmoid loss and the ramp loss has much better robustness to label noise than the SVM algorithm.
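
The sufficient condition studied in this line of work is a symmetry property: the loss on the two labels should sum to a constant for every score, i.e. l(f(x), 1) + l(f(x), −1) = K. The sketch below checks this numerically for a sigmoid loss, one common ramp-loss parameterization, and the hinge loss; the exact parameterizations (e.g. the ramp's slope) are illustrative assumptions, not necessarily the ones used in the paper.

```python
import math

def sigmoid_loss(z, beta=1.0):
    # Sigmoid loss on the margin z = y * f(x); larger beta sharpens it toward the 0-1 loss.
    return 1.0 / (1.0 + math.exp(beta * z))

def ramp_loss(z):
    # One common ramp parameterization: linear in the margin, clipped to [0, 1].
    return min(1.0, max(0.0, (1.0 - z) / 2.0))

def hinge_loss(z):
    # Standard SVM hinge loss, for contrast.
    return max(0.0, 1.0 - z)

def symmetry_defect(loss, margins):
    # How far loss(z) + loss(-z) is from being constant over the sampled margins.
    sums = [loss(z) + loss(-z) for z in margins]
    return max(sums) - min(sums)

margins = [i / 10.0 for i in range(-30, 31)]
print(symmetry_defect(sigmoid_loss, margins))  # essentially 0: satisfies the condition
print(symmetry_defect(ramp_loss, margins))     # essentially 0: satisfies the condition
print(symmetry_defect(hinge_loss, margins))    # clearly positive: hinge violates it
```

The convexity of the hinge loss is exactly what prevents it from satisfying the symmetry condition on any interval of margins, which is consistent with the abstract's remark that none of the standard convex losses qualify.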

Related articles

Robust Loss Functions under Label Noise for Deep Neural Networks

In many applications of classifier learning, training data suffers from label noise. Deep networks are learned from huge training datasets, where the problem of noisy labels is particularly relevant. Current techniques for learning deep networks under label noise focus on modifying the network architecture and on algorithms for estimating true labels from noisy labels. An alternate app...

Learning with Noisy Labels

In this paper, we theoretically study the problem of binary classification in the presence of random classification noise — the learner, instead of seeing the true labels, sees labels that have independently been flipped with some small probability. Moreover, random label noise is class-conditional — the flip probability depends on the class. We provide two approaches to suitably modify any giv...
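
The class-conditional noise model described in this snippet can be simulated in a few lines: each label is flipped independently with a probability that depends only on its class. The rates rho_pos and rho_neg below are illustrative values, not ones taken from the paper.

```python
import random

def flip_labels(labels, rho_pos, rho_neg, seed=0):
    """Class-conditional label noise: each +1 label flips with probability
    rho_pos and each -1 label with probability rho_neg, independently."""
    rng = random.Random(seed)
    return [-y if rng.random() < (rho_pos if y == 1 else rho_neg) else y
            for y in labels]

clean = [1] * 5000 + [-1] * 5000
noisy = flip_labels(clean, rho_pos=0.2, rho_neg=0.4)
pos_flip_rate = sum(y != n for y, n in zip(clean, noisy) if y == 1) / 5000
neg_flip_rate = sum(y != n for y, n in zip(clean, noisy) if y == -1) / 5000
print(pos_flip_rate, neg_flip_rate)  # empirical rates near 0.2 and 0.4
```

Uniform noise is the special case rho_pos == rho_neg; the class-conditional case is what requires the modified losses the snippet's authors propose.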

Risk Minimization in the Presence of Label Noise

Matrix concentration inequalities have attracted much attention in diverse applications such as linear algebra, statistical estimation, and combinatorial optimization. In this paper, we present new Bernstein concentration inequalities that depend only on the first moments of random matrices, whereas previous Bernstein inequalities depend heavily on both the first and second moments. Based on th...

Efficient Noise-tolerant Learning from Statistical Queries

In this paper, we study the extension of Valiant's learning model [32] in which the positive or negative classification label provided with each random example may be corrupted by random noise. This extension was first examined in the learning theory literature by Angluin and Laird [1], who formalized the simplest type of white label noise and then sought algorithms tolerating the highest possible...

Low Dropout Based Noise Minimization of Active Mode Power Gated Circuit

The power gating technique reduces leakage power in a circuit. However, power gating leads to large voltage fluctuation on the power rail during the transition from power gating mode to active mode, due to the package inductance of the printed circuit board. This voltage fluctuation may cause unwanted transitions in neighboring circuits. In this work, a power gating architecture is developed for minimizing power in a...

Journal:
  • Neurocomputing

Volume 160, Issue 

Pages  -

Publication date: 2015